Real-time deep learning phase imaging flow cytometer reveals blood cell aggregate biomarkers for haematology diagnostics

Delikoyun, Kerem, Chen, Qianyu, Wei, Liu, Myo, Si Ko, Krell, Johannes, Schlegel, Martin, Kuan, Win Sen, Soong, John Tshon Yit, Schneider, Gerhard, da Costa, Clarissa Prazeres, Knolle, Percy A., Renia, Laurent, Cove, Matthew Edward, Lee, Hwee Kuan, Diepold, Klaus, Hayden, Oliver

arXiv.org Artificial Intelligence

While analysing rare blood cell aggregates remains challenging in automated haematology, they could markedly advance label-free functional diagnostics. Conventional flow cytometers efficiently perform cell counting with leukocyte differentials but fail to identify aggregates, instead flagging results for manual review. Quantitative phase imaging flow cytometry captures detailed aggregate morphologies, but clinical use is hampered by massive data storage and offline processing. Incorporating such "hidden" biomarkers into routine haematology panels would significantly improve diagnostics without flagged results. We present RT-HAD, an end-to-end deep learning-based image and data processing framework for off-axis digital holographic microscopy (DHM), which combines physics-consistent holographic reconstruction and detection, representing each blood cell as a node in a graph to recognize aggregates. RT-HAD processes >30 GB of image data on-the-fly with a turnaround time of <1.5 min and an error rate of 8.9% in platelet aggregate detection, which matches acceptable laboratory error rates for haematology biomarkers and solves the "big data" challenge for point-of-care diagnostics.
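The abstract does not detail how RT-HAD builds its cell graph, so the following is only an illustrative sketch of the general idea: detected cells become graph nodes, cells within a distance threshold share an edge, and each connected component is reported as one aggregate. The function name, threshold, and union-find grouping are my assumptions, not the paper's method.

```python
import math

def find_aggregates(cells, max_dist=5.0):
    """Group detected cell centroids into aggregates: cells closer
    than max_dist (arbitrary units here) share a graph edge, and
    each connected component counts as one aggregate."""
    n = len(cells)
    parent = list(range(n))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(cells[i], cells[j]) <= max_dist:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two nearby platelets form one aggregate; a distant cell stays single.
cells = [(0.0, 0.0), (3.0, 0.0), (50.0, 50.0)]
print(find_aggregates(cells))  # [[0, 1], [2]]
```

A real pipeline would replace the O(n^2) pairwise loop with a spatial index, but the connected-component view of aggregates is the same.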


ResNCT: A Deep Learning Model for the Synthesis of Nephrographic Phase Images in CT Urography

Gardezi, Syed Jamal Safdar, Aronson, Lucas, Wawrzyn, Peter, Yu, Hongkun, Abel, E. Jason, Shapiro, Daniel D., Lubner, Meghan G., Warner, Joshua, Toia, Giuseppe, Mao, Lu, Tiwari, Pallavi, Wentland, Andrew L.

arXiv.org Artificial Intelligence

Purpose: To develop and evaluate a transformer-based deep learning model for the synthesis of nephrographic phase images in CT urography (CTU) examinations from the unenhanced and urographic phases. Materials and Methods: This retrospective study was approved by the local Institutional Review Board. A dataset of 119 patients (mean $\pm$ SD age, 65 $\pm$ 12 years; 75/44 males/females) with three-phase CT urography studies was curated for deep learning model development. The three phases for each patient were aligned with an affine registration algorithm. A custom model, coined Residual transformer model for Nephrographic phase CT image synthesis (ResNCT), was developed and implemented with paired inputs of non-contrast and urographic image sets and trained to produce nephrographic phase images, which were compared with the corresponding ground truth nephrographic phase images. The synthesized images were evaluated with multiple performance metrics, including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), normalized cross-correlation coefficient (NCC), mean absolute error (MAE), and root mean squared error (RMSE). Results: The ResNCT model successfully generated synthetic nephrographic images from non-contrast and urographic image inputs. With respect to ground truth nephrographic phase images, the images synthesized by the model achieved high PSNR (27.8 $\pm$ 2.7 dB), SSIM (0.88 $\pm$ 0.05), and NCC (0.98 $\pm$ 0.02), and low MAE (0.02 $\pm$ 0.005) and RMSE (0.042 $\pm$ 0.016). Conclusion: The ResNCT model synthesized nephrographic phase CT images with high similarity to ground truth images. The ResNCT model provides a means of eliminating the acquisition of the nephrographic phase with a resultant 33% reduction in radiation dose for CTU examinations.
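The evaluation metrics above are standard and easy to reproduce; a minimal NumPy sketch follows (function names are mine, not from the paper; SSIM is omitted because it needs windowed local statistics, for which a library such as scikit-image is the usual choice).

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(ref, img):
    """Normalized cross-correlation coefficient."""
    a = ref - ref.mean()
    b = img - img.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

def mae(ref, img):
    """Mean absolute error."""
    return np.mean(np.abs(ref - img))

def rmse(ref, img):
    """Root mean squared error."""
    return np.sqrt(np.mean((ref - img) ** 2))

# A uniform offset of 0.01 on a [0, 1] image gives PSNR = 40 dB.
ref = np.linspace(0.0, 1.0, 100).reshape(10, 10)
noisy = ref + 0.01
print(round(psnr(ref, noisy), 1))  # 40.0
```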


Very loopy belief propagation for unwrapping phase images

Neural Information Processing Systems

Since the discovery that the best error-correcting decoding algorithm can be viewed as belief propagation in a cycle-bound graph, researchers have been trying to determine under what circumstances "loopy belief propagation" is effective for probabilistic inference. Despite several theoretical advances in our understanding of loopy belief propagation, to our knowledge, the only problem that has been solved using loopy belief propagation is error-correcting decoding on Gaussian channels. We propose a new representation for the two-dimensional phase unwrapping problem, and we show that loopy belief propagation produces results that are superior to existing techniques. This is an important result, since many imaging techniques, including magnetic resonance imaging and interferometric synthetic aperture radar, produce phase-wrapped images. Interestingly, the graph that we use has a very large number of very short cycles, supporting evidence that a large minimum cycle length is not needed for excellent results using belief propagation.
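To see what "phase-wrapped" means concretely: the sensor observes phase only modulo 2*pi, and unwrapping must recover the smooth underlying field. In 1D the problem is easy (NumPy's `unwrap` integrates corrected finite differences); the paper's contribution is the much harder 2D case, where corrections must stay consistent around every cycle of the graph.

```python
import numpy as np

# A smooth "true" phase ramp exceeding 2*pi, as produced by MRI
# or interferometric SAR measurements.
true_phase = np.linspace(0.0, 6 * np.pi, 200)

# The instrument only observes phase modulo 2*pi, in (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))

# 1D unwrapping: add multiples of 2*pi so successive differences
# stay below pi. The 2D analogue has no such simple ordering,
# which is where loopy belief propagation comes in.
recovered = np.unwrap(wrapped)

print(np.allclose(recovered, true_phase))  # True
```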


3D-2D Neural Nets for Phase Retrieval in Noisy Interferometric Imaging

Proppe, Andrew H., Thekkadath, Guillaume, England, Duncan, Bustard, Philip J., Bouchard, Frédéric, Lundeen, Jeff S., Sussman, Benjamin J.

arXiv.org Artificial Intelligence

In recent years, neural networks have been used to solve phase retrieval problems in imaging with superior accuracy and speed than traditional techniques, especially in the presence of noise. However, in the context of interferometric imaging, phase noise has been largely unaddressed by existing neural network architectures. Such noise arises naturally in an interferometer due to mechanical instabilities or atmospheric turbulence, limiting measurement acquisition times and posing a challenge in scenarios with limited light intensity, such as remote sensing. Here, we introduce a 3D-2D Phase Retrieval U-Net (PRUNe) that takes noisy and randomly phase-shifted interferograms as inputs, and outputs a single 2D phase image. A 3D downsampling convolutional encoder captures correlations within and between frames to produce a 2D latent space, which is upsampled by a 2D decoder into a phase image. We test our model against a state-of-the-art singular value decomposition algorithm and find that PRUNe consistently produces more accurate and smoother reconstructions, with a 2.5-4x lower mean squared error at multiple signal-to-noise ratios for interferograms with low (< 1 photon/pixel) and high (~100 photons/pixel) signal intensity. Our model presents a faster and more accurate approach to phase retrieval in extremely low light intensity interferometry in the presence of phase noise, and will find application in other multi-frame noisy imaging techniques.
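The input data PRUNe consumes can be simulated with a simple forward model: each frame is a cosine interferogram with an unknown random global phase shift, corrupted by Poisson shot noise. This model is my assumption inferred from the abstract (random phase shifts, photon-counted intensities), not the paper's exact simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def interferogram_stack(phase, n_frames=8, photons=100.0):
    """Simulate a stack of randomly phase-shifted interferograms
    I_k = (photons/2) * (1 + cos(phase + delta_k)) with Poisson
    shot noise -- the multi-frame, phase-noisy input a 3D encoder
    must learn to fuse into one 2D phase image."""
    deltas = rng.uniform(0.0, 2 * np.pi, size=n_frames)
    frames = []
    for d in deltas:
        mean = 0.5 * photons * (1.0 + np.cos(phase + d))
        frames.append(rng.poisson(mean))
    return np.stack(frames), deltas

# A 64x64 horizontal phase ramp imaged over 8 jittered frames.
phase = np.linspace(0.0, np.pi, 64)[None, :] * np.ones((64, 1))
stack, deltas = interferogram_stack(phase)
print(stack.shape)  # (8, 64, 64)
```

Lowering `photons` below 1 reproduces the abstract's extreme low-light regime, where shot noise dominates each frame.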


Calibration-free quantitative phase imaging in multi-core fiber endoscopes using end-to-end deep learning

Sun, Jiawei, Zhao, Bin, Wang, Dong, Wang, Zhigang, Zhang, Jie, Koukourakis, Nektarios, Czarske, Juergen W., Li, Xuelong

arXiv.org Artificial Intelligence

Fiber endoscopes have emerged as a vital tool for high-resolution microscopic imaging in hard-to-reach areas. In contrast to conventional endoscopes with a typical diameter of several millimeters, fiber endoscopes, which can be submillimeter thin and flexible [1-5], can pass through the organ's intricate pathways without causing harm inside the body [6], making them particularly suitable for procedures requiring utmost precision and minimal invasiveness. The reduced size and adaptability of fiber endoscopes ensure less discomfort for the patient, leading to quicker recovery times and lower risk.

Recent advancements have adopted deep learning techniques to expedite the QPI image reconstruction process [12, 13]. Moreover, extant literature indicates the potential of decrypting an encoded phase directly from speckle images utilizing deep learning, although only in simulated environments [14]. This demonstrates the theoretical possibility of reconstructing the original phase directly from speckle images using deep learning for MCF phase imaging; however, networks trained on simulated data can hardly achieve accurate phase reconstructions in real-world optical systems.


ORRN: An ODE-based Recursive Registration Network for Deformable Respiratory Motion Estimation with Lung 4DCT Images

Liang, Xiao, Lin, Shan, Liu, Fei, Schreiber, Dimitri, Yip, Michael

arXiv.org Artificial Intelligence

Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data. Recent Deep Learning methods have shown promising accuracy and speedup for registering a pair of medical images. However, in 4D (3D + time) medical data, organ motion, such as respiratory motion and heart beating, cannot be effectively modeled by pair-wise methods, as they are optimized for image pairs and do not account for the organ motion patterns present in 4D data. This paper presents ORRN, an Ordinary Differential Equations (ODE)-based recursive image registration network. Our network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data. It adopts a recursive registration strategy to progressively estimate a deformation field through ODE integration of voxel velocities. We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme inhale image for 3D+t deformation tracking and 2) registering extreme exhale to inhale phase images. Our method outperforms other learning-based methods in both tasks, producing the smallest Target Registration Error of 1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and the computation time is less than 1 second for each CT volume. ORRN demonstrates promising registration accuracy, deformation plausibility, and computation efficiency on group-wise and pair-wise registration tasks. It has significant implications in enabling fast and accurate respiratory motion estimation for treatment planning in radiation therapy or robot motion planning in thoracic needle insertion.
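The core idea of integrating a time-varying voxel velocity field into a deformation can be sketched with a plain forward-Euler integrator; ORRN itself uses learned recursive updates rather than this fixed scheme, so treat the code as a conceptual simplification.

```python
import numpy as np

def integrate_deformation(velocity_fn, shape, t0=0.0, t1=1.0, steps=8):
    """Build a displacement field by forward-Euler integration of a
    time-varying voxel velocity field: pos <- pos + dt * v(pos, t).
    velocity_fn(pos, t) must return an array broadcastable against
    pos, which has shape (ndim, *shape)."""
    grid = np.stack(np.meshgrid(*[np.arange(s, dtype=float) for s in shape],
                                indexing="ij"))
    pos = grid.copy()
    dt = (t1 - t0) / steps
    for k in range(steps):
        t = t0 + k * dt
        pos = pos + dt * velocity_fn(pos, t)
    return pos - grid  # final displacement of every voxel

# A constant unit velocity along the first axis displaces every
# voxel by exactly 1 after integrating over t in [0, 1].
disp = integrate_deformation(
    lambda p, t: np.array([1.0, 0.0])[:, None, None], shape=(4, 4))
print(disp[0].mean())  # 1.0
```

Swapping the lambda for a network that predicts velocities from image features gives the learning-based version; the integration loop is what makes the resulting deformation smooth and (for small steps) nearly fold-free.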


CycleQSM: Unsupervised QSM Deep Learning using Physics-Informed CycleGAN

Oh, Gyutaek, Bae, Hyokyoung, Ahn, Hyun-Seo, Park, Sung-Hong, Ye, Jong Chul

arXiv.org Machine Learning

Quantitative susceptibility mapping (QSM) is a useful magnetic resonance imaging (MRI) technique which provides the spatial distribution of magnetic susceptibility values of tissues. QSMs can be obtained by deconvolving the dipole kernel from phase images, but the spectral nulls in the dipole kernel make the inversion ill-posed. In recent times, deep learning approaches have shown QSM reconstruction performance comparable to the classic approaches, with much faster reconstruction times. Most of the existing deep learning methods are, however, based on supervised learning, so matched pairs of input phase images and ground-truth maps are needed. Moreover, it was reported that supervised learning often leads to underestimated QSM values. To address this, here we propose a novel unsupervised QSM deep learning method using physics-informed cycleGAN, which is derived from an optimal transport perspective. In contrast to the conventional cycleGAN, our novel cycleGAN has only one generator and one discriminator thanks to the known dipole kernel. Experimental results confirm that the proposed method provides more accurate QSM maps compared to the existing deep learning approaches, and provides competitive performance to the best classical approaches despite its ultra-fast reconstruction.
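The "spectral nulls" the abstract refers to come from the standard unit dipole kernel, D(k) = 1/3 - kz^2/|k|^2, which vanishes on the magic-angle cone kz^2 = |k|^2 / 3; dividing measured phase by D there amplifies noise without bound, which is what makes the inversion ill-posed. A minimal sketch (isotropic grid spacing and B0 along the last axis assumed):

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2 / |k|^2 in k-space.
    D vanishes on the magic-angle cone kz^2 = |k|^2 / 3, so the
    naive inversion chi = F^-1[ F[phase] / D ] blows up there."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(s) for s in shape],
                             indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0          # avoid division by zero at DC
    D = 1.0 / 3.0 - kz ** 2 / k2
    D[0, 0, 0] = 0.0           # DC component is undefined; set to 0
    return D

D = dipole_kernel((32, 32, 32))
# kx = ky = kz puts the point exactly on the cone: D is zero there.
print(D[1, 1, 1])  # 0.0
```

Physics-informed approaches like the one above the code exploit the fact that D is known exactly, so only the inverse mapping has to be learned.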


Learning-based Single-step Quantitative Susceptibility Mapping Reconstruction Without Brain Extraction

Wei, Hongjiang, Cao, Steven, Zhang, Yuyao, Guan, Xiaojun, Yan, Fuhua, Yeom, Kristen W., Liu, Chunlei

arXiv.org Artificial Intelligence

Quantitative susceptibility mapping (QSM) estimates the underlying tissue magnetic susceptibility from MRI gradient-echo phase signal and typically requires several processing steps. These steps involve phase unwrapping, brain volume extraction, background phase removal and solving an ill-posed inverse problem. The resulting susceptibility map is known to suffer from inaccuracy near the edges of the brain tissues, in part due to imperfect brain extraction, edge erosion of the brain tissue and the lack of phase measurement outside the brain. This inaccuracy has thus hindered the application of QSM for measuring the susceptibility of tissues near the brain edges, e.g., quantifying cortical layers and generating superficial venography. To address these challenges, we propose a learning-based QSM reconstruction method that directly estimates the magnetic susceptibility from total phase images without the need for brain extraction and background phase removal, referred to as autoQSM. The neural network has a modified U-net structure and is trained using QSM maps computed by a two-step QSM method. 209 healthy subjects with ages ranging from 11 to 82 years were employed for patch-wise network training. The network was validated on data dissimilar to the training data, e.g., in vivo mouse brain data and brains with lesions, which suggests that the network has generalized and learned the underlying mathematical relationship between magnetic field perturbation and magnetic susceptibility. AutoQSM was able to recover the magnetic susceptibility of anatomical structures near the edges of the brain, including the veins covering the cortical surface, the spinal cord and nerve tracts near the mouse brain boundaries. Its advantages of high-quality maps, no need for brain volume extraction, and high reconstruction speed demonstrate its potential for future applications.